CMSC702 Sparsity with L1 penalties
Abstract
Although the least squares estimate is the linear unbiased estimate with minimum variance, a biased estimate may give us a better mean squared error. Consider a case where the true model is Y = β0 + β1X1 + β2X2 + ε and X1 and X2 are almost perfectly correlated (statisticians say X1 and X2 are collinear). What happens if we leave X2 out? Since X2 ≈ X1, the model is then very well approximated by Y = β0 + (β1 + β2)X1 + ε, and fitting this smaller model gives a biased but much lower-variance estimate.
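The trade-off is easy to see in simulation. Below is a minimal sketch (not from the original notes) using numpy and scikit-learn: it draws a nearly collinear design, then compares the coefficient mean squared error of ordinary least squares against a lasso fit with an L1 penalty. The sample size, correlation strength, noise level, and penalty weight alpha=0.1 are all illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n, reps = 50, 500
beta = np.array([2.0, 1.0])               # true (beta1, beta2)

ols_err, lasso_err = [], []
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)   # X2 almost equals X1 (collinear)
    X = np.column_stack([x1, x2])
    y = 1.0 + X @ beta + rng.normal(size=n)

    ols = LinearRegression().fit(X, y)
    lasso = Lasso(alpha=0.1).fit(X, y)    # L1 penalty; alpha is illustrative

    # Squared error of the coefficient estimates against the truth
    ols_err.append(np.sum((ols.coef_ - beta) ** 2))
    lasso_err.append(np.sum((lasso.coef_ - beta) ** 2))

print("OLS coefficient MSE:  ", np.mean(ols_err))
print("Lasso coefficient MSE:", np.mean(lasso_err))
```

In a run like this the lasso coefficients are biased (it typically zeroes one of the two predictors and loads roughly β1 + β2 onto the other), but because the least squares coefficients have enormous variance under near-perfect collinearity, the biased estimate attains a much smaller mean squared error.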
Similar Resources
Convex envelopes of complexity controlling penalties: the case against premature envelopment
Convex envelopes of the cardinality and rank functions, the l1 norm and the nuclear norm, have gained immense popularity due to their sparsity-inducing properties. This has given rise to a natural approach to building objectives with sparse optima, whereby such convex penalties are added to another objective. Such a heuristic approach to objective building does not always work. For example, addition of an L1 ...
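As a one-dimensional aside (not from the paper above) on why the l1 penalty induces sparse optima: the minimizer of 0.5(z − t)² + λ|t| over t is the soft-thresholding rule, which returns exactly zero whenever |z| ≤ λ. A minimal numpy sketch, with the function name and test values chosen for illustration:

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form minimizer of 0.5 * (z - t)**2 + lam * abs(t) over t."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Inputs with magnitude at most lam are mapped exactly to zero.
print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))
# [-1.5 -0.   0.   1. ]
```

A smooth penalty such as t² shrinks the minimizer toward zero but never makes it exactly zero; it is the kink of the l1 norm at the origin that produces exact zeros.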
Non-convex Penalties with Analytical Solutions for One-bit Compressive Sensing
One-bit measurements widely exist in the real world and can be used to recover sparse signals. This task is known as one-bit compressive sensing (1bit-CS). In this paper, we propose novel algorithms based on both convex and nonconvex sparsity-inducing penalties for robust 1bit-CS. We consider the dual problem, which has only one variable and provides a sufficient condition to verify whether a s...
Structured sparsity with convex penalty functions
We study the problem of learning a sparse linear regression vector under additional conditions on the structure of its sparsity pattern. This problem is relevant in Machine Learning, Statistics and Signal Processing. It is well known that a linear regression can benefit from knowledge that the underlying regression vector is sparse. The combinatorial problem of selecting the nonzero components ...
Reciprocity-driven Sparse Network Formation
A resource exchange network is considered, where exchanges among nodes are based on reciprocity. Peers receive from the network an amount of resources commensurate with their contribution. We assume the network is fully connected, and impose sparsity constraints on peer interactions. Finding the sparsest exchanges that achieve a desired level of reciprocity is in general NP-hard. To capture nea...
Feature Selection and Cancer Classification via Sparse Logistic Regression with the Hybrid L1/2+2 Regularization
Cancer classification and feature (gene) selection play an important role in knowledge discovery in genomic data. Although logistic regression is one of the most popular classification methods, it does not induce feature selection. In this paper, we present a new hybrid L1/2+2 regularization (HLR) function, a linear combination of L1/2 and L2 penalties, to select the relevant genes in the lo...
Publication date: 2014